20 research outputs found

    Analyzing the Impact of Cognitive Load in Evaluating Gaze-based Typing

    Gaze-based virtual keyboards provide an effective interface for text entry by eye movements. The efficiency and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystrokes per character, and backspace usage. However, unlike traditional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human brain cognition. Employing eye gaze as an input can impose excessive mental demand, and in this work we argue the need to include cognitive load as an eye typing evaluation measure. We evaluate three variations of gaze-based virtual keyboards, which differ in the positioning of word suggestions. The conventional text entry metrics indicate no significant difference in the performance of the different keyboard designs. However, STFT (Short-time Fourier Transform) based analysis of EEG signals indicates differences in the mental workload of participants while interacting with these designs. Moreover, the EEG analysis provides insights into the variation of the user's cognition across typing phases and intervals, which should be considered in order to improve eye typing usability.
    Comment: 6 pages, 4 figures, IEEE CBMS 201
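    As an illustration of the kind of EEG analysis described above, the following Python sketch estimates band power over time with an STFT. It is a minimal sketch, not the authors' pipeline; the sampling rate, the theta band limits, and the use of mean band power as a workload proxy are assumptions.

```python
import numpy as np
from scipy.signal import stft

def band_power_over_time(eeg, fs=256.0, band=(4.0, 7.0), nperseg=256):
    """Estimate power in a frequency band over time via STFT.

    eeg:  1-D array holding one EEG channel.
    fs:   sampling rate in Hz (assumed; depends on the recording device).
    band: (low, high) in Hz; frontal theta (4-7 Hz) is a commonly used
          workload correlate, assumed here for illustration.
    """
    f, t, Z = stft(eeg, fs=fs, nperseg=nperseg)
    mask = (f >= band[0]) & (f <= band[1])
    power = np.abs(Z[mask, :]) ** 2      # power per frequency bin and frame
    return t, power.mean(axis=0)         # mean band power per STFT frame

# Example: compare mean theta power across two halves of a recording
# (synthetic data; a real analysis would segment recorded EEG per typing phase).
rng = np.random.default_rng(0)
eeg = rng.standard_normal(256 * 60)      # one minute of fake EEG at 256 Hz
t, theta = band_power_over_time(eeg)
print(theta[: len(theta) // 2].mean(), theta[len(theta) // 2 :].mean())
```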

    Improving user experience of eye tracking-based interaction: Introspecting and adapting interfaces

    Eye tracking systems have greatly improved in recent years, becoming a viable and affordable option as a digital communication channel, especially for people lacking fine motor skills. Using eye tracking as an input method is challenging due to accuracy and ambiguity issues, and research in eye gaze interaction has therefore focused mainly on better pointing and typing methods. However, these methods eventually need to be assimilated to enable users to control application interfaces. A common approach to employing eye tracking for controlling application interfaces is to emulate mouse and keyboard functionality. We argue that the emulation approach incurs unnecessary interaction and visual overhead for users, degrading the entire experience of gaze-based computer access. We discuss how knowledge of the interface semantics can help reduce the interaction and visual overhead and improve the user experience. Thus, we propose the efficient introspection of interfaces to retrieve the interface semantics and adapt the interaction for eye gaze. We have developed a Web browser, GazeTheWeb, that introspects Web page interfaces and adapts both the browser interface and the interaction elements on Web pages for gaze input. In a summative lab study with 20 participants, GazeTheWeb allowed participants to accomplish information search and browsing tasks significantly faster than an emulation approach. Additional feasibility tests of GazeTheWeb in lab and home environments showcase its effectiveness in accomplishing daily Web browsing activities and adapting a large variety of modern Web pages to support interaction for people with motor impairments.
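    To make the contrast with emulation concrete, here is a minimal Python sketch of how introspected interface semantics could drive gaze interaction: each interactive element carries a semantic kind, and a fixation inside its bounds triggers an element-appropriate action rather than an emulated mouse click. The element kinds, coordinates, and action names are illustrative assumptions, not GazeTheWeb's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Element:
    kind: str        # e.g. "link", "text_input" (assumed semantic labels)
    box: tuple       # (x, y, w, h) in screen pixels

def hit_test(elements, gaze_x, gaze_y):
    """Return the introspected element under the gaze point, if any."""
    for el in elements:
        x, y, w, h = el.box
        if x <= gaze_x <= x + w and y <= gaze_y <= y + h:
            return el
    return None

def on_fixation(elements, gaze_x, gaze_y):
    """Dispatch a gaze-appropriate action instead of emulating a click."""
    el = hit_test(elements, gaze_x, gaze_y)
    if el is None:
        return "ignore"
    if el.kind == "text_input":
        return "open_gaze_keyboard"   # adapted interaction for typing
    if el.kind == "link":
        return "navigate"             # direct activation, no emulated click
    return "activate"

# Hypothetical elements, as an introspection pass over a page might report them.
page = [Element("link", (10, 10, 120, 24)), Element("text_input", (10, 60, 300, 32))]
print(on_fixation(page, 50, 75))      # -> open_gaze_keyboard
```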

    CNVVE: Dataset and Benchmark for Classifying Non-verbal Voice Expressions


    Eye-controlled interfaces for multimedia interaction

    The EU-funded MAMEM project (Multimedia Authoring and Management using your Eyes and Mind) aims to propose a framework for natural interaction with multimedia information for users who lack fine motor skills. As part of this project, the authors have developed a gaze-based control paradigm. Here, they outline the challenges of eye-controlled interaction with multimedia information and present initial project results. Their objective is to investigate how eye-based interaction techniques can be made precise and fast enough to let disabled people easily interact with multimedia information.

    Assessing the usability of gaze-adapted Interface against conventional eye-based input emulation

    In recent years, eye tracking systems have greatly improved and are beginning to play a promising role as an input medium. Eye trackers can be used for application control either by simply emulating the mouse and keyboard devices in a traditional graphical user interface, or through interfaces customized for eye gaze events. In this work, we evaluate these two approaches to assess their impact on usability. We present a gaze-adapted Twitter application interface with direct eye gaze interaction, and compare it to Twitter in a conventional browser interface with gaze-based mouse and keyboard emulation. We conducted an experimental study, which indicates a significantly better subjective user experience for the gaze-adapted approach. Based on the results, we argue the need for user interfaces that interact directly with eye gaze input to provide an improved user experience, specifically in the field of accessibility.

    eyeGUI: a novel framework for eye-controlled user interfaces

    In generic applications, user interfaces and input events are typically composed of mouse and keyboard interactions. Eye-controlled applications need to translate these interactions into eye gestures, so the design and optimization of interface elements becomes a substantial concern. In this work, we propose the novel eyeGUI framework to support the development of such interactive eye-controlled applications, covering many significant aspects such as rendering, layout, dynamic modification of content, and support for graphics and animation.

    Hummer: Text entry by Gaze and Hum

    The paper (number 9988) has been conditionally accepted for inclusion in the 2021 ACM Conference on Human Factors in Computing Systems (CHI 2021).

    GazeTheKey: interactive keys to integrate word predictions for gaze-based text entry

    In conventional keyboard interfaces for eye typing, the functionalities of the virtual keys are static, i.e., a user's gaze at a particular key simply enters the associated letter as input. In this work we argue that keys should be more dynamic and embed intelligent predictions to support gaze-based text entry. In this regard, we demonstrate a novel "GazeTheKey" interface where a key not only signifies the input character, but also predicts the relevant words that can be selected by the user's gaze utilizing a two-step dwell time.
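    The two-step dwell mechanic can be sketched as a small state machine: a first dwell threshold reveals word suggestions on the key, and a longer dwell commits the focused input. The thresholds and the commit behavior below are illustrative assumptions, not GazeTheKey's actual parameters.

```python
import time

# Thresholds are illustrative; the actual GazeTheKey timings differ.
T_PREVIEW = 0.35   # seconds of dwell before suggestions appear on the key
T_SELECT  = 0.70   # total dwell before the focused item is committed

class TwoStepDwellKey:
    def __init__(self, letter, suggestions):
        self.letter = letter
        self.suggestions = suggestions
        self.dwell_start = None

    def update(self, gazed_at_key, now=None):
        """Call once per gaze sample; returns an event or None."""
        now = time.monotonic() if now is None else now
        if not gazed_at_key:
            self.dwell_start = None          # gaze left the key: reset
            return None
        if self.dwell_start is None:
            self.dwell_start = now           # dwell begins
            return None
        dwell = now - self.dwell_start
        if dwell >= T_SELECT:
            self.dwell_start = None
            return ("commit", self.letter)                    # second step
        if dwell >= T_PREVIEW:
            return ("show_suggestions", self.suggestions)     # first step
        return None

key = TwoStepDwellKey("t", ["the", "this", "that"])
print(key.update(True, now=0.0), key.update(True, now=0.4), key.update(True, now=0.8))
```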

    Chromium based Framework to include Gaze Interaction in Web Browser

    Enabling Web interaction by non-conventional input sources like eyes has great potential to enhance Web accessibility. In this paper, we present a Chromium-based inclusive framework to adapt eye gaze events in Web interfaces. The framework provides more utility and control for developing a full-featured interactive browser, compared to the related approaches of additional mouse and keyboard emulation or browser extensions. We demonstrate the framework through a sophisticated gaze-driven Web browser, which effectively supports all browsing operations like search, navigation, bookmarks, and tab management.
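    A browser built on such a framework ultimately routes recognized gaze events (e.g., a dwell on a toolbar icon) to browsing operations like navigation and tab management. The Python sketch below shows one plausible dispatch structure; all names are hypothetical and do not reflect the framework's actual Chromium integration.

```python
# Minimal sketch of routing gaze events to browsing operations.
class GazeBrowser:
    def __init__(self):
        self.tabs, self.active = [["about:blank"]], 0   # per-tab history

    def navigate(self, url):
        self.tabs[self.active].append(url)

    def go_back(self):
        if len(self.tabs[self.active]) > 1:
            self.tabs[self.active].pop()

    def open_tab(self):
        self.tabs.append(["about:blank"])
        self.active = len(self.tabs) - 1

def dispatch(browser, gaze_event):
    """Map a recognized gaze event to a high-level browsing operation."""
    handlers = {
        "back": browser.go_back,       # dwell on the back icon
        "new_tab": browser.open_tab,   # dwell on the new-tab icon
    }
    handler = handlers.get(gaze_event)
    if handler:
        handler()

b = GazeBrowser()
b.navigate("https://example.org")
dispatch(b, "back")
print(b.tabs)   # -> [['about:blank']]
```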